perm filename AIPHIL.PRO[W79,JMC] blob sn#421832 filedate 1979-02-28 generic text, type C, neo UTF8
.require "memo.pub[let,jmc]" source

	As you may know, we are arranging a group project at the Center
for 1979-80 on the general topic of artificial intelligence and
philosophy.  The interdisciplinary group begins with the premise that both
artificial intelligence and philosophy are concerned with intelligent
behavior in physical and biological systems.  Both face conceptual
problems in characterizing behavior.  On the philosophical side, Daniel
Dennett, among others, has characterized intentional systems which are
physical systems to which can be ascribed intentional qualities such as
beliefs and wants.  On the artificial intelligence side, John McCarthy has
identified conditions under which mental qualities can be ascribed to machines.

	Artificial intelligence helps the philosopher, because intelligent
programs provide a domain where behavior is precisely defined but to which
one must ascribe some intellectual qualities if one is to describe what
one knows about their behavior.  For example, what a particular person knows
about the state of a particular computer operating system may be expressed
as ascribing to the program an incorrect belief that a certain user of the
system does not want to run his program.  From the artificial intelligence
point of view, a program that plans travel must know that travel agents
know airline schedules and must know that the gate at which a flight will
leave an intermediate stop is not knowable initially, but can easily be
discovered by an English-speaking traveller in the United States at the
time the information will be required.

	While no one expects to solve all the philosophical puzzles
concerning knowledge and wants in the near future, joint work by
philosophers and artificial intelligence people can identify and solve
some of the easier problems.  Sorting out the problems into easy and hard
will benefit both philosophy and artificial intelligence and facilitate
bridging the gap between the abstract world of philosophy and artificial
intelligence and practical real world problems of education and industry.
Cognitive psychology has already benefitted from the concreteness of
artificial intelligence systems and will also benefit from the
identification and solution of the more straightforward problems of
knowledge, wanting and obligation.

	The scientists who are currently scheduled to participate in this
group include:  Professor John McCarthy (Chairman), Computer Science,
Stanford University; Daniel Dennett, Philosophy, Tufts University; John
Haugeland, Philosophy, University of Pittsburgh; Patrick Hayes, Computer
Sciences, University of Essex; Marvin Minsky, Computer Science-Electrical
Engineering, M.I.T.; Robert Moore, Computer Science-Engineering, SRI
International; and Zenon W. Pylyshyn, Psychology, University of Western
Ontario.

	Some of the issues to be addressed by the study are
.item←0

	#. What challenges must a system meet before its behavior
can be considered intelligent?

	#. What, if any, is the relation between the unsolvability and
incompleteness results of logic and recursive function theory and
the possibility of intelligent machines?

	#. Is intelligent behavior
all of a piece, or can a system possess some important aspects
of intelligent behavior and not others?

	#. If the latter, what are these components of intelligent
behavior?

	#. What constitutes a common sense knowledge of the world
apart from specialized knowledge but sufficient so that specialized
knowledge can be added?

	#. Specifically, what are the basic facts about events
occurring in time, about causality and about the effects of actions?

	#. Is there a sense in which the deep knowledge of these
topics that philosophers are trying to obtain can be bypassed?

	#. How much knowledge of the world is required to understand
ordinary language?

	These issues are basic to cognitive science.
The generality of artificial intelligence (AI) systems has been
limited by overly special views of causality, knowledge, belief and
action.  This is especially true of systems using natural language,
because it is at present impossible to give a clear statement of
the level of understanding such systems must have.  Therefore, the
critic has to invent proposed counterexamples, i.e. to devise
some request that a human with common sense can fulfill, but that the
system cannot.

	The group is the first prolonged joint study involving
both artificial intelligence researchers and philosophers.  The
participants are motivated by the discovery that they have been
thinking along parallel lines about a number of problems and are
looking forward to the opportunity to interact more closely.
However, each of the researchers has many problems about which
he has already been thinking and writing and on which he knows
how he can make progress.  Therefore, even more than is the case
with most interdisciplinary studies, the participants are
uncertain of what form the interaction can best take.  For this
reason, although we plan to begin with a seminar meeting several
times per week that will allow the participants to familiarize
themselves quickly with the ideas of the others, no specific
output is demanded.

	Because the interest in AI and philosophy extends far
beyond the seven participants, we have also planned an October meeting
for about 40 invitees.
.<<"I have your letter of January 19 indicating that you'd like to pursue
.the possibility of an officer grant this year and see if a larger grant
.might be possible next year.  To do this, I'll need a proposal, which may
.be in the form of a letter something like your letter of December 20 but with
.a bit more detail.  In particular, the nature of the issues to be addressed
.should be amplified.  There should also be some indication of what might
.result from the project, particularly the effects on the development of
.cognitive science". - from Klivington letter of February 2>>